
Kubernetes Volumes configuration

danger

Note that all our PVs need to be ReadWriteMany if you use more than one node.
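For illustration, a minimal PersistentVolumeClaim requesting the ReadWriteMany access mode could look like the sketch below; the claim name and storage class are placeholders, not the names used by the actual deployment.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exivity-example-pvc        # placeholder name
spec:
  accessModes:
    - ReadWriteMany                # required when pods run on more than one node
  storageClassName: rwx-storage    # any storageClass that supports RWX
  resources:
    requests:
      storage: 10Gi
```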

App-specific FS

Most services require a number of directories, as well as a valid config.json file. Where possible, these are provisioned for each process individually, without sharing filesystems between services.

Config.json

Services require a config.json file to be mounted at /exivity/home/system/config.json in order to work. For services controlled by Merlin, this config also includes the configuration for the Merlin process in the container. Applications that don't rely on Merlin for their operation read the required config from the default config named app-config-default. In either case these files are native Kubernetes ConfigMap objects mounted at the correct path.
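A minimal sketch of how such a ConfigMap could be mounted at the expected path, assuming a ConfigMap named app-config-default with a config.json key; the Deployment fragment and the config content are illustrative only.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-default
data:
  config.json: |
    { "example": "illustrative content only" }
---
# Fragment of a pod template mounting the ConfigMap as a single file
spec:
  containers:
    - name: app
      volumeMounts:
        - name: config
          mountPath: /exivity/home/system/config.json
          subPath: config.json        # mount only the config.json key
  volumes:
    - name: config
      configMap:
        name: app-config-default
```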

The table below shows which log volume and config each service mounts:

| Service | exivity-{{ app }}-log | app config (app-config-default or app-config-{{ app }}) |
| --- | --- | --- |
| chronos | x | x |
| edify | x | x |
| glass | | |
| griffon | x | x |
| horizon | x | x |
| pigeon | x | x |
| proximity-api | x | x |
| proximity-cli | x | x |
| transcript | x | x |
| use | x | x |

To facilitate the built-in log viewing in the UI, each log volume is mounted in the app it serves and in proximity-api.
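As a sketch, using transcript as an example, the same log claim could be referenced from both the transcript pod and the proximity-api pod. The fragments below are illustrative, not the chart's actual manifests; the mount paths are placeholders.

```yaml
# Fragment of the transcript pod template
volumes:
  - name: transcript-log
    persistentVolumeClaim:
      claimName: exivity-transcript-log
containers:
  - name: transcript
    volumeMounts:
      - name: transcript-log
        mountPath: /exivity/home/system/log/transcript   # illustrative path
---
# Fragment of the proximity-api pod template, mounting the same claim read-only
volumes:
  - name: transcript-log
    persistentVolumeClaim:
      claimName: exivity-transcript-log
      readOnly: true
containers:
  - name: proximity-api
    volumeMounts:
      - name: transcript-log
        mountPath: /exivity/home/system/log/transcript   # illustrative path
```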

Shared FS

Requirements

A number of services in the application use shared filesystems.

These services need to be deployed with the volumes set to ReadWriteMany, or be scheduled to the same node. Ideally, a shared FS is only used when strictly necessary.

The shared volumes are exivity-etl-config, exivity-exported, exivity-extracted, exivity-import and exivity-report.

| Service | Shared volumes mounted |
| --- | --- |
| chronos | none |
| edify | three of the five |
| glass | none |
| griffon | none |
| horizon | none |
| pigeon | none |
| proximity-api | all five |
| proximity-cli | all five |
| transcript | all five |
| use | four of the five |
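If a ReadWriteMany storage class is not available, the alternative mentioned in the requirements above is to co-schedule the pods that share a volume on the same node. A minimal sketch using pod affinity follows; the label key and value are illustrative, not the labels used by the actual deployment.

```yaml
# Fragment of a pod template for a service that shares volumes with transcript,
# forcing it onto the same node so a ReadWriteOnce volume can still be shared
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: transcript   # illustrative label
        topologyKey: kubernetes.io/hostname
```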

Accessmodes

Since the application requires shared volumes between services, any cluster that is suitable for a deployment must have at least one storageClass that allows the ReadWriteMany access mode on PersistentVolumes.

Google Kubernetes Engine (GKE)

Public cloud providers generally do not provision clusters with a storageClass that supports the ReadWriteMany access mode by default. In the case of GKE, a solution is offered in the form of a managed NFS service named Filestore.

Here are several methods for using Filestore (FS) storage in a GKE cluster. This is by no means an exhaustive list; there are many solutions.

  • Install the CSI driver and provision storage automatically with a storageClass (see the StorageClass sketch after this list). Every PV is backed by its own Filestore instance, with a minimum size of 1Ti, which makes this more expensive than options using only a single Filestore instance. The advantages are automation, backups and performance.
  • Add an in-cluster NFS provisioner backed by a Filestore instance. After creating the FS instance, install the linked NFS subdir Helm chart using the IP and path set for the FS instance on creation (see the values sketch after this list). The correct values can be found on the details page of the Filestore instance in the Google Cloud console.
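For the first option, a StorageClass using the GKE Filestore CSI driver could look roughly like this; the class name and tier are illustrative, so check the GKE documentation for the parameters that fit your setup.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-rwx              # illustrative name
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard                   # Filestore tier; instances start at 1Ti
  network: default                 # VPC network of the cluster
allowVolumeExpansion: true
```

For the second option, assuming the nfs-subdir-external-provisioner chart is the one referred to, the Filestore IP and export path are passed as chart values; the address, share name and class name below are placeholders.

```yaml
# values.yaml for the nfs-subdir-external-provisioner Helm chart, installed e.g. with
# helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f values.yaml
nfs:
  server: 10.0.0.2          # IP address shown on the Filestore instance details page
  path: /exivity_share      # file share name chosen when creating the instance
storageClass:
  name: nfs-rwx             # illustrative storageClass name
```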

Networking

Zero Trust

Networking for an Exivity deployment follows the zero trust paradigm. Services can only communicate with each other when communication has been explicitly allowed.
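In Kubernetes terms this is typically implemented with NetworkPolicy objects: a default-deny policy plus explicit allow rules. A minimal sketch follows; the labels and port are illustrative, not the actual policies shipped with the deployment.

```yaml
# Deny all ingress traffic to pods in the namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Explicitly allow proximity-api to reach transcript (illustrative labels and port)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-proximity-api-to-transcript
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: transcript
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: proximity-api
      ports:
        - protocol: TCP
          port: 8080
```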